
On Prior Distributions for Orthogonal Function Sequences

Sugasawa, Shonosuke, Mochihashi, Daichi

arXiv.org Machine Learning

We propose a novel class of prior distributions for sequences of orthogonal functions, which are frequently required in statistical models such as functional principal component analysis (FPCA). Our approach constructs priors sequentially by imposing adaptive orthogonality constraints through a hierarchical formulation of conditionally normal distributions. The degree of orthogonality is controlled via hyperparameters that can be learned from the observed data, allowing a flexible trade-off between exact orthogonality and smoothness. We illustrate the properties of the proposed prior and show that it leads to nearly orthogonal posterior estimates. The proposed prior is employed in Bayesian FPCA, providing more interpretable principal functions and efficient low-rank representations. Through simulation studies and an analysis of human mobility data in Tokyo, we demonstrate the superior performance of our approach in inducing orthogonality and improving functional component estimation.
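The sequential construction can be illustrated with a short sketch. The following is a minimal NumPy illustration of the general idea, not the authors' exact formulation: each new function is a smooth Gaussian draw conditioned on noisy zero "observations" of its inner products with the previous draws, so a single precision hyperparameter (lam below, an illustrative name) controls how close to exact the orthogonality is.

import numpy as np

# Minimal sketch, NOT the paper's formulation: condition each new smooth
# Gaussian draw on soft zero constraints for its inner products with all
# previous draws. lam -> infinity recovers exact orthogonality.

def smooth_cov(t, length_scale=0.2):
    # Squared-exponential covariance, encoding smoothness of the draws.
    d = t[:, None] - t[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def sample_orthogonal_sequence(t, k, lam=1e5, seed=0):
    rng = np.random.default_rng(seed)
    n = len(t)
    prior = smooth_cov(t) + 1e-8 * np.eye(n)
    funcs = []
    for _ in range(k):
        cov = prior
        if funcs:
            # Gaussian conditioning on A f ~ N(0, I / lam), where the
            # rows of A compute grid approximations of <f, f_j>.
            A = np.stack(funcs) / n
            S = A @ cov @ A.T + np.eye(len(funcs)) / lam
            cov = cov - cov @ A.T @ np.linalg.solve(S, A @ cov)
        f = rng.multivariate_normal(np.zeros(n), cov, check_valid="ignore")
        funcs.append(f)
    return np.stack(funcs)

t = np.linspace(0.0, 1.0, 200)
F = sample_orthogonal_sequence(t, k=3)
print(np.round(F @ F.T / len(t), 4))  # near-diagonal Gram matrix

Printing the Gram matrix shows the off-diagonal inner products shrinking toward zero as lam grows, mirroring the trade-off between exact orthogonality and smoothness described in the abstract.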


Correspondence Analysis Using Neural Networks

Hsu, Hsiang, Salamatian, Salman, Calmon, Flavio P.

arXiv.org Machine Learning

Correspondence analysis (CA) is a multivariate statistical tool used to visualize and interpret data dependencies. CA has found applications in fields ranging from epidemiology to the social sciences. However, current methods for performing CA do not scale to large, high-dimensional datasets. By re-interpreting the CA objective through an information-theoretic tool called the principal inertia components, we demonstrate that performing CA is equivalent to solving a functional optimization problem over the space of finite-variance functions of two random variables. We show that this optimization problem, in turn, can be efficiently approximated by neural networks. The resulting formulation, called the correspondence analysis neural network (CA-NN), enables CA to be performed at an unprecedented scale. We validate CA-NN on synthetic data and demonstrate how it can be used to perform CA on a variety of datasets, including food recipes, wine compositions, and images. Our approach outperforms traditional CA methods, indicating that CA-NN can serve as a new, scalable tool for interpreting and visualizing complex dependencies between random variables.
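As a rough illustration of the reformulated objective, here is a hedged PyTorch sketch, not the authors' released implementation: two small networks f and g stand in for the finite-variance functions of X and Y, trained to maximize their expected inner product under a soft whitening penalty that keeps each representation zero-mean with identity covariance. All names, architecture choices, and hyperparameters below are illustrative assumptions.

import torch
import torch.nn as nn

# Illustrative sketch of a CA-style objective: maximize E[f(X)^T g(Y)]
# while softly constraining f(X) and g(Y) to be zero-mean and whitened.

def whiten_penalty(Z):
    Zc = Z - Z.mean(0, keepdim=True)
    cov = Zc.T @ Zc / (Z.shape[0] - 1)
    eye = torch.eye(Z.shape[1])
    return Z.mean(0).pow(2).sum() + (cov - eye).pow(2).sum()

def ca_nn_loss(fx, gy, alpha=1.0):
    corr = (fx * gy).sum(1).mean()  # empirical E[f(X)^T g(Y)]
    return -corr + alpha * (whiten_penalty(fx) + whiten_penalty(gy))

d = 3  # number of correspondence dimensions to extract
f = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, d))
g = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, d))
opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-3)

X, Y = torch.randn(512, 10), torch.randn(512, 10)  # stand-in paired data
for _ in range(200):
    opt.zero_grad()
    loss = ca_nn_loss(f(X), g(Y))
    loss.backward()
    opt.step()

The soft penalty is only one way to impose the whitening constraint; a hard constraint (e.g., Cholesky-based whitening of the batch outputs, as in Deep CCA variants) is an alternative design choice, and the individual output dimensions are identified only up to an orthogonal rotation.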


Deep Orthogonal Representations: Fundamental Properties and Applications

Hsu, Hsiang, Salamatian, Salman, Calmon, Flavio P.

arXiv.org Machine Learning

Several representation learning and, more broadly, dimensionality reduction techniques seek to produce representations of the data that are orthogonal (uncorrelated). Examples include PCA, CCA, Kernel/Deep CCA, the ACE algorithm, and correspondence analysis (CA). For a fixed data distribution, all finite-variance representations belong to the same function space regardless of how they are derived. In this work, we present a theoretical framework for analyzing this function space and demonstrate how a basis for it can be found using neural networks. We show that this framework (i) underlies recent multi-view representation learning methods, (ii) enables classical exploratory statistical techniques such as CA to be scaled via neural networks, and (iii) can be used to derive new methods for comparing black-box models. We illustrate these applications empirically on several datasets.
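To make the third application concrete, here is a small hedged sketch, assuming hypothetical feature matrices from two black-box models rather than any API from the paper: whitening each model's features over shared probe data yields an empirically orthonormal basis for the span of its representation, and the singular values of the cross-covariance between the two whitened bases are canonical correlations, giving a simple similarity profile between the models.

import torch

# Hedged sketch of comparing two black-box models via their (whitened,
# hence orthogonal) representation spaces. model features are stand-ins.

def whiten(Z, eps=1e-6):
    Zc = Z - Z.mean(0, keepdim=True)
    cov = Zc.T @ Zc / (Z.shape[0] - 1)
    evals, evecs = torch.linalg.eigh(cov)
    W = evecs @ torch.diag((evals + eps).rsqrt()) @ evecs.T
    return Zc @ W  # columns now uncorrelated with unit variance

def canonical_correlations(Za, Zb):
    A, B = whiten(Za), whiten(Zb)
    cross = A.T @ B / (A.shape[0] - 1)
    return torch.linalg.svdvals(cross)  # 1 = shared direction, 0 = none

X = torch.randn(1024, 16)                                 # probe inputs
Za = X @ torch.randn(16, 8)                               # "model A" features
Zb = X @ torch.randn(16, 8) + 0.1 * torch.randn(1024, 8)  # "model B" features
print(canonical_correlations(Za, Zb))

Correlation values near 1 flag directions the two representation spaces share, while values near 0 flag directions unique to one model, which is one simple way to operationalize the black-box comparison the abstract mentions.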